5,467 research outputs found

    Cosmic microwave background constraints on light dark matter candidates

    Full text link
    Unveiling the nature of cosmic dark matter (DM) is an urgent issue in cosmology. Here we make use of a strategy based on the search for the imprints left on the cosmic microwave background temperature and polarization spectra by the energy deposition due to annihilations of the most promising dark matter candidate, a stable WIMP of mass 1-20 GeV. A major improvement with respect to previous similar studies is a detailed treatment of the annihilation cascade and its energy deposition in the cosmic gas. This is vital, as this quantity is degenerate with the annihilation cross-section. The strongest constraints are obtained from a Markov chain Monte Carlo analysis of the combined WMAP7 and SPT datasets up to $\ell_{\rm max} = 3100$. If annihilation occurs via the $e^+e^-$ channel, a light WIMP can be excluded at the $2\sigma$ c.l. as a viable DM candidate in the above mass range. However, if annihilation occurs via the $\mu^+\mu^-$ or $\tau^+\tau^-$ channels instead, we find that WIMPs with mass > 5 GeV might represent a viable cosmological DM candidate. We compare the results obtained in the present work with those obtained adopting a simplified analytical model for the energy deposition process widely used in the literature, and we find that realistic energy deposition descriptions can shift the resulting constraints by up to 60%.
    Comment: 10 pages, 8 figures, 5 tables. Accepted for publication in MNRAS
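
    The degeneracy mentioned above is conventionally captured by the CMB annihilation parameter $p_{\rm ann} = f\,\langle\sigma v\rangle / m_\chi$, where $f$ is the energy deposition efficiency. The short sketch below (illustrative numbers only, not taken from the paper) makes the degeneracy explicit: rescaling $f$ and $\langle\sigma v\rangle$ in opposite directions leaves the CMB-observable combination unchanged.

```python
import math

# The CMB-observable combination for s-wave annihilating dark matter is
# commonly written p_ann = f_eff * <sigma v> / m_chi. All values below are
# illustrative, not taken from the paper.

def p_ann(f_eff, sigma_v, m_chi):
    """Annihilation parameter [cm^3 s^-1 GeV^-1]: f_eff * <sigma v> / m_chi."""
    return f_eff * sigma_v / m_chi

# Two different (efficiency, cross-section) pairs, same CMB imprint:
a = p_ann(f_eff=0.4, sigma_v=3e-26, m_chi=10.0)   # detailed deposition model
b = p_ann(f_eff=0.2, sigma_v=6e-26, m_chi=10.0)   # simplified model, rescaled
print(math.isclose(a, b))  # True: the CMB alone cannot break this degeneracy
```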

    Comparison of scheduling models: the Husqvarna-Motorcycles case.

    Get PDF
    The modern manufacturing world is characterized by growing competitive pressure, which translates into a search for production systems that minimize the cost of non-value-adding activities and that reconcile the demands of management policies based on economies of scale with those based on responsiveness and flexibility. Cost minimization and high flexibility are generally conflicting requirements; however, a good compromise can be reached through appropriate medium-term planning. This thesis addresses the real Husqvarna-Motorcycles production planning and scheduling case, with the goal of obtaining feasible and economical plans. At the start of the project, medium-term planning of vehicle and engine assembly was carried out manually, with optimization criteria based on common sense and with the objective of satisfying demand while saturating production resources as much as possible. In the case under study, the vehicle assembly lines can be modelled as a single machine performing a single task, with sequence-dependent setup times and a delivery time window for each job. Any job completed outside its time window incurs additional costs: capital lock-up and storage costs if the job is finished early, while tardiness costs cannot be quantified directly but strongly affect the company's image, since the jobs originate from customer orders or sales forecasts. The work aims to automate this procedure in order to optimize the time lost in setups, minimize inventory and capital lock-up costs, and limit delays in delivering the finished product arising from production capacity constraints. The adopted procedure uses the Ant Colony System (ACS) algorithm, formalized by Dorigo et al. in 1992, to solve the optimal setup sequence problem, together with a heuristic algorithm that searches for economic lot sizes and levels production so that the quantities requested by the marketing department can be produced. The thesis therefore covers the presentation of the company and its production system, the general concepts of production planning and scheduling, and the solution adopted to obtain a feasible and economical production plan.
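
    A minimal sketch of the ACS construction and pheromone-update steps applied to a setup-sequencing problem of this kind is shown below. The toy setup matrix and all parameter values are illustrative, not from the thesis.

```python
import random

# Minimal Ant Colony System (ACS) sketch: sequence n jobs so as to minimize
# total sequence-dependent setup time. Toy data, illustrative parameters.

setup = [  # setup[i][j]: setup time when job j follows job i
    [0, 4, 7, 3],
    [5, 0, 2, 6],
    [8, 3, 0, 4],
    [2, 7, 5, 0],
]
n = len(setup)
alpha, beta = 1.0, 2.0      # pheromone vs. heuristic weight
rho, q0 = 0.1, 0.9          # evaporation rate, exploitation probability
tau0 = 1.0 / (n * 20.0)     # initial pheromone level
tau = [[tau0] * n for _ in range(n)]

def tour_length(seq):
    return sum(setup[a][b] for a, b in zip(seq, seq[1:]))

def build_tour():
    current = random.randrange(n)
    seq, unvisited = [current], set(range(n)) - {current}
    while unvisited:
        scores = {j: (tau[current][j] ** alpha) *
                     ((1.0 / (setup[current][j] + 1e-9)) ** beta)
                  for j in unvisited}
        if random.random() < q0:                    # exploitation
            nxt = max(scores, key=scores.get)
        else:                                       # biased exploration
            total = sum(scores.values())
            r, acc, nxt = random.random() * total, 0.0, None
            for j, s in scores.items():
                acc += s
                if acc >= r:
                    nxt = j
                    break
            if nxt is None:                         # guard against rounding
                nxt = j
        # ACS local pheromone update on the traversed edge
        tau[current][nxt] = (1 - rho) * tau[current][nxt] + rho * tau0
        seq.append(nxt)
        unvisited.discard(nxt)
        current = nxt
    return seq

best = None
for _ in range(200):                                # iterations
    for _ in range(10):                             # ants per iteration
        seq = build_tour()
        if best is None or tour_length(seq) < tour_length(best):
            best = seq
    # global update: reinforce the edges of the best sequence found so far
    for a, b in zip(best, best[1:]):
        tau[a][b] = (1 - rho) * tau[a][b] + rho / tour_length(best)

print(best, tour_length(best))
```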

    Constraining Warm Dark Matter with high-z supernova lensing

    Full text link
    We propose a new method to constrain the warm dark matter (WDM) particle mass, $m_\chi$, based on the counts of multiply imaged, distant supernovae (SNe) produced by strong lensing by intervening cosmological matter fluctuations. The counts are very sensitive to the WDM particle mass, assumed here to be $m_\chi = 1, 1.5, 2$ keV. We use the analytic approach developed by Das & Ostriker to compute the probability density function of the cold dark matter (CDM) convergence ($\kappa$) on the lens plane; this method has been extensively tested against numerical simulations. We have extended this method, generalizing it to the WDM case, after testing it against WDM $N$-body simulations. Using the observed cosmic star formation history, we compute the probability for a distant SN to undergo a strong lensing event in different cosmologies. A minimum observing time of 2 yr (5 yr) is required for a future 100 square degree survey reaching $z \approx 4$ ($z \approx 3$) to disentangle at $2\sigma$ a WDM ($m_\chi = 1$ keV) model from the standard CDM scenario. Our method is not affected by any astrophysical uncertainty (such as baryonic physics effects), and, in principle, it does not require any particular dedicated survey strategy, as it may come as a byproduct of a future SN survey.
    Comment: 7 pages, 7 figures, 1 table. Accepted for publication in MNRAS
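
    The counting logic can be illustrated with a toy Monte Carlo: given a probability density for the convergence on the lens plane, the chance that a single SN is strongly lensed is a tail probability, and the expected number of lensed SNe follows from the survey counts. The sketch below is NOT the Das & Ostriker formalism; the lognormal PDFs and every numerical value are purely illustrative.

```python
import numpy as np

# Toy sketch of the counting logic only. Real convergence PDFs come from the
# Das & Ostriker formalism; the lognormal shapes below are stand-ins chosen
# so that the "WDM" case has a narrower high-kappa tail than the "CDM" case,
# mimicking the suppression of small-scale power.

rng = np.random.default_rng(0)

def p_strong_lens(kappa_samples, kappa_crit=0.35):
    """Tail probability P(kappa > kappa_crit) from Monte Carlo samples."""
    return np.mean(kappa_samples > kappa_crit)

kappa_cdm = rng.lognormal(mean=-3.0, sigma=1.0, size=1_000_000)
kappa_wdm = rng.lognormal(mean=-3.0, sigma=0.8, size=1_000_000)

n_sne = 5_000  # hypothetical SNe observed by the survey at this redshift
for label, kappa in [("CDM", kappa_cdm), ("WDM", kappa_wdm)]:
    p = p_strong_lens(kappa)
    print(f"{label}: P(lens) = {p:.2e}, expected lensed SNe = {n_sne * p:.1f}")
```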

    Dynamic set-up rules for hybrid flow shop scheduling with parallel batching machines

    Get PDF
    An S-stage hybrid (or flexible) flow shop with sequence-independent uniform set-up times and parallel batching machines with compatible parallel batch families (as in casting, heat treatments in furnaces, chemical or galvanic baths, autoclave painting, etc.) has been analysed with the purpose of reducing the number of tardy jobs (and the makespan); in Graham's notation: $FPB(m_1, m_2, \ldots, m_S) \mid p\text{-batch}, ST_{si,b} \mid \sum U_i$. Jobs are sorted dynamically (at each new delivery); batches are closed within sliding (or rolling) time windows and processed in parallel by multiple identical machines. Computational experiments on benchmarks have shown the superior performance of the two proposed heuristics, based on new formulations of the critical ratio ($CR_{setup}$) that consider the ratio of allowance to set-up and processing time in the scheduling horizon, which improves on the weighted modified operation due date rule.
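
    One plausible reading of such a $CR_{setup}$ dispatching rule is sketched below: jobs with the least slack per unit of set-up plus processing time are scheduled first, with the ratio recomputed at each (re)scheduling instant. The exact formulation in the paper may differ.

```python
from dataclasses import dataclass

# A minimal dispatching sketch of a critical-ratio rule extended with set-up
# times: CR = (due_date - now) / (setup_time + processing_time); jobs with
# the smallest ratio are the most critical and go first. Illustrative only.

@dataclass
class Job:
    name: str
    due: float        # due date
    proc: float       # processing time
    setup: float      # set-up time for the job's batch family

def cr_setup(job: Job, now: float) -> float:
    return (job.due - now) / (job.setup + job.proc)

def dispatch(jobs, now=0.0):
    """Re-sort dynamically at each new delivery: most critical job first."""
    return sorted(jobs, key=lambda j: cr_setup(j, now))

jobs = [Job("A", due=20, proc=5, setup=2),
        Job("B", due=12, proc=6, setup=1),
        Job("C", due=15, proc=3, setup=4)]
for j in dispatch(jobs):
    print(j.name, round(cr_setup(j, 0.0), 2))
```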

    Application of the modified finite particle method to the simulation of the corneal air puff test

    Get PDF
    We present a numerical procedure for the simulation of the air puff test, a medical procedure used by ophthalmologists for the identification of the intraocular pressure, and potentially useful for the identification of the material properties of the human cornea. The problem involves modelling the cornea, a biological tissue, as a hyperelastic material, and the aqueous humor, the fluid filling the anterior chamber of the eye, as a Newtonian fluid discretized with a meshfree formulation suited to the solution of a fluid-structure interaction problem. The fluid and the structure are coupled through a Dirichlet-Neumann iterative approach, which permits the adoption of a partitioned coupling scheme and of explicit, fast solvers for the individual subproblems.
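
    The coupling pattern described above can be sketched schematically: at each iteration the fluid solver receives the interface motion (Dirichlet data), the structural solver receives the resulting traction (Neumann data), and the exchange is under-relaxed until the interface displacement stops changing. The two "solvers" below are placeholder functions, not the actual meshfree fluid and hyperelastic corneal solvers.

```python
import numpy as np

# Schematic Dirichlet-Neumann partitioned coupling loop. Only the coupling
# pattern reflects the text; both solvers are toy stand-ins.

def fluid_solve(interface_displacement):
    # placeholder: returns the traction the fluid exerts on the interface
    return -0.5 * interface_displacement + 1.0

def solid_solve(interface_traction):
    # placeholder: returns the interface displacement under that traction
    return 0.8 * interface_traction

d = np.zeros(1)            # interface displacement (initial guess)
omega, tol = 0.5, 1e-10    # under-relaxation factor, convergence tolerance

for k in range(100):
    t = fluid_solve(d)             # Dirichlet step: fluid sees wall motion
    d_new = solid_solve(t)         # Neumann step: solid sees fluid traction
    if np.linalg.norm(d_new - d) < tol:
        break
    d = (1 - omega) * d + omega * d_new   # relaxed fixed-point update

print(f"converged after {k} iterations, d = {d[0]:.6f}")
```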

    Applying CHC algorithms on radio network design for wireless communication

    Get PDF
    The diffusion of wireless communication services (telephone, internet, etc.) is growing continuously these days. Unfortunately, the cost of the equipment needed to provide the service with the appropriate quality is high. Selecting a set of geographical points that allows optimum coverage of a radio-frequency signal while minimizing the use of resources is therefore essential. This task is called Radio Network Design (RND) and is an NP-hard problem, so it can be approached using metaheuristic techniques. Metaheuristics are methods comprising local improvement procedures and high-level strategies for a robust search of the problem space. In this work, different versions of the CHC algorithm with a fitness function based on the efficiency of resource use are proposed. The achieved results are encouraging in terms of efficiency and quality in all the analysed scenarios.
    XV Workshop de Agentes y Sistemas Inteligentes
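
    A compact sketch of CHC applied to RND is given below. The coverage model and the fitness (a common choice in the RND literature is coverage squared over the number of transmitters, which may differ from the paper's exact function) are illustrative stand-ins; the CHC mechanics follow Eshelman's scheme: HUX crossover, incest prevention, elitist survival, and a cataclysmic restart when the search stagnates.

```python
import random

# CHC sketch for RND: each bit of an individual decides whether a candidate
# site gets a transmitter. Toy coverage data, illustrative fitness.

random.seed(1)
N_SITES, POP, GENS = 30, 50, 200
covers = [set(random.sample(range(100), 12)) for _ in range(N_SITES)]

def fitness(ind):
    covered = set().union(*(covers[i] for i, b in enumerate(ind) if b))
    used = sum(ind) or 1
    return (len(covered) ** 2) / used        # reward coverage, punish cost

def hux(p1, p2):
    diff = [i for i in range(N_SITES) if p1[i] != p2[i]]
    random.shuffle(diff)
    c1, c2 = p1[:], p2[:]
    for i in diff[: len(diff) // 2]:         # swap half the differing bits
        c1[i], c2[i] = p2[i], p1[i]
    return c1, c2

pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(POP)]
threshold = N_SITES // 4
for _ in range(GENS):
    random.shuffle(pop)
    children = []
    for p1, p2 in zip(pop[::2], pop[1::2]):
        hamming = sum(a != b for a, b in zip(p1, p2))
        if hamming / 2 > threshold:          # incest prevention
            children.extend(hux(p1, p2))
    merged = sorted(pop + children, key=fitness, reverse=True)
    if merged[:POP] == sorted(pop, key=fitness, reverse=True):
        threshold -= 1                       # no child survived: tighten
    pop = merged[:POP]
    if threshold < 0:                        # cataclysmic restart from best
        best = pop[0]
        pop = [best] + [[b ^ (random.random() < 0.35) for b in best]
                        for _ in range(POP - 1)]
        threshold = N_SITES // 4

print("best fitness:", round(fitness(pop[0]), 1))
```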

    Hybridization of K-Means through metaheuristic techniques

    Get PDF
    In recent years there has been great growth in our ability to generate and collect data, driven mainly by the processing power of machines and the low cost of storage. However, within these enormous masses of data lies a large amount of "hidden" information of great strategic importance, which classical information retrieval techniques cannot reach. Data mining involves "digging" through that immensity of data in search of patterns, associations, or predictions that can turn that tangle of data into useful information. One of the tasks used in data mining is clustering, and a very popular and simple algorithm for this task is K-means. Despite its popularity, the algorithm suffers from several difficulties: K-means requires several iterations over the whole data set, which can make it computationally very expensive when applied to large databases; the number of clusters K must be supplied by the user; and the search is prone to getting trapped in local minima. This line of research aims to develop advanced or improved data mining techniques, particularly for the clustering task, and to propose improvements to the K-means algorithm based on the application of metaheuristic techniques.
    Track: Agentes y Sistemas Inteligentes
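
    The abstract does not specify a particular hybrid, so as a concrete illustration of the weakness it targets, here is plain K-means wrapped in a random-restart loop, the simplest metaheuristic-style remedy for its sensitivity to initialization and local minima.

```python
import random

# Plain K-means plus random restarts keeping the best sum of squared errors
# (SSE). Illustrative only; the research line proposes richer hybrids.

def kmeans(points, k, iters=100):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: (p[0]-centers[c][0])**2 + (p[1]-centers[c][1])**2)
            clusters[i].append(p)
        new = [tuple(sum(x)/len(c) for x in zip(*c)) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:           # converged to a (possibly local) minimum
            break
        centers = new
    sse = sum(min((p[0]-c[0])**2 + (p[1]-c[1])**2 for c in centers)
              for p in points)
    return centers, sse

random.seed(3)
points = [(random.gauss(mx, 0.5), random.gauss(my, 0.5))
          for mx, my in [(0, 0), (5, 0), (0, 5)] for _ in range(50)]

# Multi-start: run K-means from many random initializations, keep best SSE.
best = min((kmeans(points, k=3) for _ in range(20)), key=lambda r: r[1])
print("best SSE over 20 restarts:", round(best[1], 2))
```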